Memory-Efficient Structured Convex Optimization via Extreme Point Sampling
Authors
Abstract
Memory is a key computational bottleneck when solving large-scale convex optimization problems, such as semidefinite programs (SDPs). In this paper, we focus on the regime in which storing an $n\times n$ matrix decision variable is prohibitive. To solve SDPs in this regime, we develop a randomized algorithm that returns a random vector whose covariance matrix is near-feasible and near-optimal for the SDP. We show how to develop such an algorithm by modifying the Frank--Wolfe method to systematically replace the matrix iterates with random vectors. As an application of this approach, we show how to implement the Goemans--Williamson approximation algorithm for MaxCut using $\mathcal{O}(n)$ memory in addition to the memory required to store the problem instance. We then extend our approach to deal with a broader range of structured convex optimization problems, replacing matrix decision variables with random extreme points of the feasible region.
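The covariance-tracking idea summarized above can be made concrete with a short sketch. The Python fragment below is an illustrative reading under stated assumptions, not the authors' implementation: the penalized MaxCut formulation, the step-size rule, and the names `low_memory_frank_wolfe_maxcut`, `beta`, and `T` are introduced here purely for illustration.

```python
# A minimal sketch (not the paper's code) of the covariance-tracking Frank--Wolfe
# idea.  Instead of storing the n x n iterate X_t, we keep a random vector z_t
# with Cov(z_t) = X_t: the rank-one step
#     X_{t+1} = (1 - eta) X_t + eta * v v^T
# is mirrored as
#     z_{t+1} = sqrt(1 - eta) * z_t + sqrt(eta) * g * v,  g ~ N(0, 1) independent,
# which preserves the covariance identity.  The penalized MaxCut formulation and
# step-size rule below are illustrative assumptions.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh


def low_memory_frank_wolfe_maxcut(L, T=500, beta=5.0, seed=None):
    """Return a random vector z whose covariance approximates the MaxCut SDP
    solution (diag(X) = 1, X positive semidefinite, <L/4, X> maximized),
    using O(n) working memory beyond the Laplacian L itself."""
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    z = np.zeros(n)        # random vector standing in for X_t (Cov(z) = X_t)
    diag_X = np.zeros(n)   # diagonal of the implicit iterate, tracked exactly
    for t in range(T):
        eta = 2.0 / (t + 2.0)  # classic Frank--Wolfe step size (eta = 1 at t = 0)
        # Gradient of f(X) = -<L/4, X> + (beta/2)*||diag(X) - 1||^2 applied to a
        # vector u needs only diag_X, never the full matrix X.
        grad = LinearOperator(
            (n, n),
            matvec=lambda u, d=diag_X: -(L @ u) / 4.0 + beta * (d - 1.0) * u,
            dtype=float,
        )
        # Linear minimization over {X psd, tr(X) = n}: the minimizer is n*u u^T
        # for the unit eigenvector u of the smallest eigenvalue of the gradient.
        _, U = eigsh(grad, k=1, which="SA")
        v = np.sqrt(n) * U[:, 0]
        # Mirror X_{t+1} = (1 - eta) X_t + eta * v v^T on the O(n) state.
        z = np.sqrt(1.0 - eta) * z + np.sqrt(eta) * rng.standard_normal() * v
        diag_X = (1.0 - eta) * diag_X + eta * v * v
    return z
```

Since the returned z plays the role of a sample from N(0, X), taking `np.sign(low_memory_frank_wolfe_maxcut(L))` acts as a hyperplane rounding in the spirit of Goemans--Williamson; the only state kept beyond the problem instance is the pair (z, diag_X), i.e., O(n) numbers.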
Similar resources
An Efficient Interior-Point Method for Convex Multicriteria Optimization Problems
In multicriteria optimization, several objective functions, conflicting with each other, have to be minimized simultaneously. We propose a new efficient method for approximating the solution set of a multiobjective programming problem, where the objective functions involved are arbitrary convex functions and the set of feasible points is convex. The method is based on generating warm-start point...
Structured Sparsity and Convex Optimization
The concept of parsimony is central in many scientific domains. In the context of statistics, signal processing or machine learning, it takes the form of variable or feature selection problems, and is commonly used in two situations: First, to make the model or the prediction more interpretable or cheaper to use, i.e., even if the underlying problem does not admit sparse solutions, one looks fo...
Structured sparsity through convex optimization
Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. While naturally cast as a combinatorial optimization problem, variable or feature selection admits a convex relaxation through the regularization by the l1-norm. In this paper, we consider situations where we are not only interested in sparsity, but where some structural prior knowledge is ...
Sequential Change-Point Detection via Online Convex Optimization
Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provid...
Structured Convex Optimization under Submodular Constraints
A number of discrete and continuous optimization problems in machine learning are related to convex minimization problems under submodular constraints. In this paper, we deal with a submodular function with a directed graph structure, and we show that a wide range of convex optimization problems under submodular constraints can be solved much more efficiently than general submodular optimizatio...
Journal
Journal title: SIAM Journal on Mathematics of Data Science
Year: 2021
ISSN: 2577-0187
DOI: https://doi.org/10.1137/20m1358037